

A Learning and Sampling

Neural Information Processing Systems

A.1 Deep generative modelling. A complete trajectory is denoted by ζ. The log-likelihood function is L(θ) = Σ ... Applying this simple identity, we also have 0 = E[...]. On the other hand, it discourages action samples drawn directly from the prior. To ensure the transition model's validity, it must be grounded in real-world dynamics when jointly learned with the policy; otherwise, the agent would be purely hallucinating based on the demonstrations. This would not be a problem if the action space were quantized. Intuitively, the action samples at each step are updated by back-propagation with the energy of all subsequent actions and a single-step forward pass. To train the policy, Eq. (8) can now be rewritten as δ... Eq. (5) is an empirical estimate of E[...]. We first prove that the construction above is valid at optimality.
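The gradient-based refinement of action samples mentioned above can be sketched as a Langevin-style update. This is a minimal illustration only: the quadratic energy, step sizes, and noise scale below are assumptions for demonstration, not the paper's actual model.

```python
import numpy as np

def langevin_refine(actions, energy_grad, n_steps=50, step_size=0.1,
                    noise_scale=0.01, seed=0):
    """Refine action samples by noisy gradient descent on an energy function."""
    rng = np.random.default_rng(seed)
    a = np.asarray(actions, dtype=float).copy()
    for _ in range(n_steps):
        # Move against the energy gradient, plus small exploration noise.
        a = a - step_size * energy_grad(a) + noise_scale * rng.normal(size=a.shape)
    return a

# Toy energy E(a) = (a - 1)^2 with its minimum at a* = 1.0 (illustrative choice).
grad = lambda a: 2.0 * (a - 1.0)
samples = langevin_refine(np.zeros(8), grad)
```

After refinement, the samples concentrate near the low-energy action a* = 1.0, which is the intuition behind updating action samples with the energy signal via back-propagation.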



Vision-Language Navigation with Energy-Based Policy

Neural Information Processing Systems

Vision-language navigation (VLN) requires an agent to execute actions following human instructions. Existing VLN models are optimized through expert demonstrations by supervised behavioural cloning or incorporating manual reward engineering. While straightforward, these efforts overlook the accumulation of errors in the Markov decision process, and struggle to match the distribution of the expert policy. Going beyond this, we propose an Energy-based Navigation Policy (ENP) to model the joint state-action distribution using an energy-based model. At each step, low energy values correspond to the state-action pairs that the expert is most likely to perform, and vice versa.
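As a minimal sketch of the energy-based policy idea, a policy can place probability mass on actions in proportion to exp(-E(s, a)), so low-energy (expert-like) state-action pairs are preferred. The discrete candidate set and hand-picked energy values below are illustrative assumptions, not from the paper.

```python
import numpy as np

def energy_policy_probs(energies):
    """pi(a|s) proportional to exp(-E(s, a)) over a discrete candidate set."""
    # Subtract the minimum energy for numerical stability before exponentiating.
    z = np.exp(-(energies - np.min(energies)))
    return z / z.sum()

# Three candidate actions; the expert-like action has the lowest energy.
E = np.array([2.0, 0.1, 3.5])
p = energy_policy_probs(E)
```

The lowest-energy candidate receives the highest probability, matching the abstract's description that low energy values correspond to the state-action pairs the expert is most likely to perform.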


Sampling from Energy-based Policies using Diffusion

Jain, Vineet, Akhound-Sadegh, Tara, Ravanbakhsh, Siamak

arXiv.org Artificial Intelligence

Energy-based policies offer a flexible framework for modeling complex, multimodal behaviors in reinforcement learning (RL). In maximum entropy RL, the optimal policy is a Boltzmann distribution derived from the soft Q-function, but direct sampling from this distribution in continuous action spaces is computationally intractable. As a result, existing methods typically use simpler parametric distributions, like Gaussians, for policy representation, limiting their ability to capture the full complexity of multimodal action distributions. In this paper, we introduce a diffusion-based approach for sampling from energy-based policies, where the negative Q-function defines the energy function. Based on this approach, we propose an actor-critic method called Diffusion Q-Sampling (DQS) that enables more expressive policy representations, allowing stable learning in diverse environments. We show that our approach enhances exploration and captures multimodal behavior in continuous control tasks, addressing key limitations of existing methods.
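A small sketch of the Boltzmann policy pi(a|s) proportional to exp(Q(s, a) / alpha) on a discretized action grid. The bimodal Q-function and the grid discretization are illustrative assumptions; the paper's contribution is sampling such policies in continuous action spaces with diffusion, which a single Gaussian policy cannot do.

```python
import numpy as np

def boltzmann_policy(q_values, alpha=1.0):
    """pi(a|s) proportional to exp(Q(s, a) / alpha); lower alpha sharpens toward greedy."""
    logits = (q_values - q_values.max()) / alpha  # shift by max for stability
    p = np.exp(logits)
    return p / p.sum()

# Bimodal Q over a 1-D action grid: two equally good modes at a = -1 and a = +1,
# a shape that a unimodal Gaussian policy cannot represent.
grid = np.linspace(-2, 2, 201)
q = -np.minimum((grid - 1.0) ** 2, (grid + 1.0) ** 2)
p = boltzmann_policy(q, alpha=0.1)
```

With a small temperature, the resulting distribution keeps high probability at both modes while suppressing the low-Q region between them, illustrating the multimodality that motivates diffusion-based samplers.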